In all cases, answer c) is the Bayesian answer (see Dienes, in press).


            For question 1, I suspect a majority of researchers have at some time taken a) as their answer in similar cases. They have Bayesian intuitions, but apply them with the wrong tools, the only tools apparently available, and tools inappropriate for actually cashing out the intuition. Choice a) is also the answer one might pick by thinking with a meta-analytic mind-set, but use of meta-analysis here is complicated by the fact that the stopping rule was conditional upon obtaining a significant finding. Arguably, the correct orthodox answer is to regard the data as non-significant, as in b). Power may be low, but in effect one committed to that level of Type II error in planning the study. Answer c) spells out the intuition, and Bayes provides the tools for implementing it.
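The point that Bayesian evidence is unaffected by the stopping rule can be made concrete. The sketch below (in Python, with entirely hypothetical numbers) computes a simple Bayes factor of the kind Dienes describes: the likelihood of the observed mean under H1, with a half-normal prior on the true effect, relative to its likelihood under H0. Because only the likelihood of the data enters the calculation, when and why one stopped collecting data plays no role.

```python
import math

def normal_pdf(x, mu, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bayes_factor(mean, se, prior_sd, steps=2000):
    """Bayes factor for H1 over H0.

    H1: the true effect has a half-normal prior (SD = prior_sd),
        representing a theory that predicts a positive effect.
    H0: the true effect is exactly zero.
    The observed mean is treated as normally distributed about the
    true effect with standard error se. Numerical integration over
    the prior gives the marginal likelihood under H1.
    """
    upper = prior_sd * 5          # integrate the prior out to 5 SDs
    dx = upper / steps
    like_h1 = 0.0
    for i in range(steps):
        d = (i + 0.5) * dx
        prior = 2 * normal_pdf(d, 0, prior_sd)   # half-normal density
        like_h1 += prior * normal_pdf(mean, d, se) * dx
    like_h0 = normal_pdf(mean, 0, se)
    return like_h1 / like_h0

# Hypothetical data: observed mean 5, SE 2, theory expects effects around 5.
# The Bayes factor is the same whether the researcher topped up the sample
# after a marginal result or fixed n in advance.
print(bayes_factor(5.0, 2.0, 5.0))   # substantially greater than 1
print(bayes_factor(0.0, 2.0, 5.0))   # less than 1: evidence favouring H0
```

The effect size, standard error, and prior SD above are illustrative only; in practice the prior SD would be set from what the theory predicts.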


            For question 2, again I suspect many people have decided a) in similar circumstances, because of the Bayesian intuitions in c), and so used the wrong tools for the right reasons. One suspects that in many papers the introduction was written entirely in the light of the results. We implicitly accept this as good practice, and indeed train our students to do likewise for the sake of the poor reader of our paper. But b) is the correct answer based on the Neyman-Pearson approach, and maybe your conscience told you so. Yet should you be worrying about what might be murky – which really came first, data or hypothesis? – or, rather, about what really matters: whether the predictions really follow from a substantial theory in a clear, simple way?


            For question 3, practice may vary depending on whether the author is a believer in, or a sceptic about, subliminal perception. After all, there is no strict standard about what counts as a “family” for the sake of multiple testing. There is a pull between accepting the intuition in c) that surely there is evidence for this method, and the realisation that more tests mean more opportunities for inferential mistakes. But one should not confuse strength of evidence with the probability of obtaining it (Royall, 1997). Evidence is evidence even if, as one increases the circle of what tests are in the “family”, the probability that some of the evidence will be misleading increases.
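The arithmetic behind that last sentence is easy to verify. In the sketch below (Python, assuming independent tests each run at α = .05), the family-wise probability of at least one false positive grows with the size of the family, even though the evidence provided by any single test is what it is regardless of how many other tests were run.

```python
def familywise_error(k, alpha=0.05):
    """Probability that at least one of k independent tests,
    each at level alpha, yields a false positive."""
    return 1 - (1 - alpha) ** k

# The chance of some misleading result grows as the "family" widens:
for k in (1, 5, 20):
    print(k, round(familywise_error(k), 3))
# prints:
# 1 0.05
# 5 0.226
# 20 0.642
```

The independence assumption is a simplification; correlated tests inflate the family-wise rate more slowly, but the qualitative point stands.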


            For question 4, many researchers may conclude a), and indeed feel that by taking power into account they are morally ahead of the pack of typical researchers who ignore power. Those schooled in Fisherian intuitions may choose b). A Bayesian analysis forces one to consider the range of effect sizes consistent with a theory. As soon as one considers this question, whether as a Bayesian or not, it will become apparent that the effect obtained in a previous study does not define the lowest effect size one is interested in. Thus, even in studies that do take power into account, they likely do not consider a minimally interesting effect, and they likely use rather low power (80%) – rather low, that is, for obtaining evidence that could substantially favour a null hypothesis over the theory of interest. Because orthodox methods are not based on how persuasive you should find evidence, they often allow conclusions that in fact should not be persuasive. And without doing a Bayesian analysis, you have no idea how persuasive they should be.
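The gap between powering for a previously obtained effect and powering for a minimally interesting effect can be shown numerically. The sketch below (Python, hypothetical effect sizes and standard error) approximates the power of a two-sided z-test; a study powered respectably for last year's effect may be badly underpowered for the smallest effect the theory actually cares about.

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power(effect, se, z_crit=1.959964):
    """Approximate power of a two-sided z-test at alpha = .05,
    ignoring the negligible contribution of the opposite tail."""
    return 1 - phi(z_crit - effect / se)

# Hypothetical numbers: a previous study found an effect of 10 units;
# the smallest effect of theoretical interest is only 5 units.
# With SE = 4, the study looks reasonably powered for the former
# but not for the latter.
print(round(power(10, 4), 2))   # powered for the previous effect
print(round(power(5, 4), 2))    # powered for a minimally interesting effect
```

Powering against the minimally interesting effect generally demands a much larger sample than powering against a point estimate from an earlier study, which is precisely the consideration a Bayesian analysis forces into the open.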


            Question 5 again illustrates a case where orthodox statistics could produce the same answer as Bayes: one could calculate a confidence interval on raw weight loss and see that it excludes the values one is interested in. In this case, the advantage of the Bayesian approach is that it forces you to consider what range of effects you are really interested in. It forces you to take into account that which is inferentially relevant. Orthodox statistics do not (cf., e.g., Kirsch, 2009, "The Emperor's New Drugs").
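The confidence-interval route mentioned above can be sketched as follows (Python, with entirely hypothetical numbers): a 95% interval on raw weight loss can exclude zero, licensing a "significant" claim, while simultaneously excluding every value large enough to matter to a patient.

```python
def ci95(mean, se):
    """95% confidence interval for a mean, normal approximation."""
    half = 1.959964 * se
    return (mean - half, mean + half)

# Hypothetical trial: the drug group lost 1.8 kg more than placebo, SE 0.5 kg.
lo, hi = ci95(1.8, 0.5)

# Assumed threshold: the smallest loss a patient would care about is 3 kg.
minimal_interesting = 3.0

print(round(lo, 2), round(hi, 2))       # interval excludes zero...
print(hi < minimal_interesting)          # ...and excludes all interesting values
```

The interval here is "significant" in the orthodox sense (it excludes zero) yet rules out every effect anyone should care about; it is the explicit comparison with the minimally interesting value, not the significance test, that carries the inferential weight.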